We introduce a spherical fingertip sensor for dynamic manipulation. It is based on barometric pressure and time-of-flight proximity sensing, and it is low-latency, compact, and physically robust. The sensor uses a trained neural network to estimate contact location and three-axis contact force from data from pressure sensors embedded within the sensor's polyurethane rubber dome. The time-of-flight sensors face in three different outward directions, and an integrated microcontroller samples each individual sensor at 200 Hz. To quantify the effect of system latency on dynamic manipulation performance, we develop and analyze a metric called the collision impulse ratio and characterize the end-to-end latency of our new sensor. We also present experimental demonstrations with the sensor, including measuring contact transitions, performing coarse mapping, maintaining a contact force with a moving object, and reacting to avoid collisions.
Translated by Google Translate
Modern robotic manipulation systems fall short of human manipulation skill, in part because they rely on closing feedback loops around visual data, which lowers a system's bandwidth and speed. By developing autonomous grasping reflexes that rely on high-bandwidth force, contact, and proximity data, overall system speed and robustness can be improved while reducing reliance on visual data. We are developing a new system built around a low-impedance, high-speed arm with nimble fingers that combines a high-level trajectory planner operating at less than 1 Hz with low-level autonomous reflex controllers running at over 300 Hz. We characterize the reflexive system by comparing the volumes of successful grasps achieved by variations of a baseline controller and of the reflexive grasp controller, finding that our controller expands the successful grasp volume by up to 55% relative to the baseline. We also deploy the reflexive grasp controller with a simple vision-based planner in an autonomous clutter-clearing task, achieving a success rate above 90% while clearing over 100 items.
We present a proprioceptive teleoperation system that uses a reflexive grasping algorithm to enhance the speed and robustness of picking tasks. The system consists of two manipulators that use quasi-direct-drive actuation to provide highly transparent force feedback. The end-effector features a bimodal force sensor that measures three-axis force information and a two-dimensional contact location. This information is used for anti-slip and re-grasp reflexes. When the user makes contact with a desired object, the re-grasp reflex aligns the gripper's fingers with antipodal points on the object to maximize grasp stability. The reflex takes only 150 ms to correct inaccurate grasps chosen by the user, so the user's motion is only minimally disturbed by the execution of the re-grasp. Once antipodal contact is established, the anti-slip reflex ensures that the gripper applies enough normal force to prevent the object from slipping out of the grasp. The combination of proprioceptive manipulators and reflexive grasping allows users to complete teleoperated tasks at high speed.
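The abstract does not specify the anti-slip control law. As a rough, hypothetical sketch of the idea, a normal-force set-point can be chosen so the measured tangential load stays inside the friction cone; the friction coefficient, safety margin, and force limits below are invented for illustration, not taken from the paper:

```python
def grip_force_command(tangential_force, mu=0.8, margin=1.5,
                       f_min=1.0, f_max=40.0):
    """Return a normal-force set-point (N) that keeps the object inside the
    friction cone |F_t| <= mu * F_n, with a safety margin, clamped to a
    hypothetical gripper force range [f_min, f_max]."""
    f_n = margin * abs(tangential_force) / mu
    return min(max(f_n, f_min), f_max)
```

For example, a measured tangential load of 8 N with mu = 0.8 and a 1.5x margin yields a 15 N normal-force command; loads near zero fall back to the minimum holding force.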
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time; a DT must therefore be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT allowing the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies are considered using a production gas turbine engine system to demonstrate the digital representation accuracy for real-world, time-varying physical systems.
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
We present a Machine Learning (ML) case study to illustrate the challenges of clinical translation for a real-time AI-empowered echocardiography system with data from ICU patients in LMICs. The case study includes data preparation, curation and labelling from 2D ultrasound videos of 31 ICU patients in LMICs, and model selection, validation and deployment of three thinner neural networks to classify the apical four-chamber view. Results of the ML heuristics showed promising implementation, validation and application of thinner networks to classify the 4CV with limited datasets. We conclude by noting the need for (a) datasets with improved diversity of demographics and diseases, and (b) further investigation of thinner models that can run on low-cost hardware so that the system can be clinically translated in the ICU in LMICs. The code and other resources to reproduce this work are available at https://github.com/vital-ultrasound/ai-assisted-echocardiography-for-low-resource-countries.
The ability to jointly learn from multiple modalities, such as text, audio, and visual data, is a defining feature of intelligent systems. While there have been promising advances in designing neural networks to harness multimodal data, the enormous success of data augmentation currently remains limited to single-modality tasks like image classification. Indeed, it is particularly difficult to augment each modality while preserving the overall semantic structure of the data; for example, a caption may no longer be a good description of an image after standard augmentations have been applied, such as translation. Moreover, it is challenging to specify reasonable transformations that are not tailored to a particular modality. In this paper, we introduce LeMDA, Learning Multimodal Data Augmentation, an easy-to-use method that automatically learns to jointly augment multimodal data in feature space, with no constraints on the identities of the modalities or the relationship between modalities. We show that LeMDA can (1) profoundly improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprised of image, text, and tabular data.
The SINDy algorithm has been successfully used to identify the governing equations of dynamical systems from time series data. In this paper, we argue that this makes SINDy a potentially useful tool for causal discovery, and that existing tools for causal discovery can be used to dramatically improve the performance of SINDy as a tool for robust sparse modeling and system identification. We then demonstrate empirically that augmenting the SINDy algorithm with tools from causal discovery provides engineers with a tool for learning causally robust governing equations.
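As a minimal sketch of the kind of sparse regression at the core of SINDy (this is the standard sequentially thresholded least-squares formulation, not the paper's causal-discovery-augmented variant; the toy system and threshold are chosen for illustration):

```python
import numpy as np

def sindy_stls(theta, x_dot, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: solve x_dot ~ theta @ xi,
    zeroing coefficients with magnitude below `threshold` each iteration
    and refitting on the surviving library terms."""
    xi = np.linalg.lstsq(theta, x_dot, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(x_dot.shape[1]):
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], x_dot[:, k],
                                             rcond=None)[0]
    return xi

# Toy system x' = -2x with a candidate library [1, x, x^2]:
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t).reshape(-1, 1)
x_dot = -2.0 * x
theta = np.hstack([np.ones_like(x), x, x ** 2])
xi = sindy_stls(theta, x_dot)
```

On this toy problem the recovered coefficient vector is sparse, with only the linear term surviving the thresholding.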
Our aim is to build autonomous agents that can solve tasks in environments like Minecraft. To do so, we use an imitation learning-based approach. We formulate our control problem as a search problem over a dataset of experts' demonstrations, where the agent copies actions from a similar demonstration trajectory of image-action pairs. We perform a proximity search over the BASALT MineRL dataset in the latent representation of a Video PreTraining model. The agent copies the actions from the expert trajectory as long as the distance between the state representations of the agent and of the selected expert trajectory does not diverge; then the proximity search is repeated. Our approach can effectively recover meaningful demonstration trajectories and exhibits human-like agent behavior in the Minecraft environment.
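The search-and-copy loop described above can be sketched in a few lines. This is a simplified illustration with a plain nearest-neighbor search over latent vectors and a fixed divergence threshold; the actual latent space (from the Video PreTraining model), distance measure, and threshold used in the work are not specified here:

```python
import numpy as np

def proximity_rollout(agent_latents, expert_latents, expert_actions,
                      max_dist=0.5):
    """Copy actions from an expert trajectory while the agent's latent state
    stays within `max_dist` of the matched expert state; when the states
    diverge, re-run the proximity search for the nearest expert state."""
    actions = []
    idx = None
    for z in agent_latents:
        if idx is None or np.linalg.norm(z - expert_latents[idx]) > max_dist:
            # Proximity search: nearest expert state in latent space.
            idx = int(np.argmin(np.linalg.norm(expert_latents - z, axis=1)))
        actions.append(expert_actions[idx])
        # Step forward along the matched expert trajectory.
        idx = min(idx + 1, len(expert_actions) - 1)
    return actions

# Toy 1-D latent trajectories (invented for illustration):
expert_z = np.array([[0.0], [1.0], [2.0]])
expert_a = ["forward", "jump", "attack"]
agent_z = np.array([[0.05], [1.1], [5.0]])
rollout = proximity_rollout(agent_z, expert_z, expert_a)
```

In the toy run, the agent tracks the expert for the first two steps, then the third latent state diverges and triggers a fresh proximity search.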